refactor: create 1-1 mapping between operations and AsmOp decorators #2455
base: next
Conversation
huitseeker
left a comment
Thanks for the fix! This change needs a test that goes through the assembly path to verify the 1-1 mapping.
Existing tests in core/src/mast/tests.rs manually construct decorators like vec![(2, decorator_id)], which doesn't test that set_instruction_cycle_count() creates the mapping.
I would suggest a test in this PR in crates/assembly/src/tests.rs that:
- Assembles MASM code with multi-cycle instructions
- Verifies all operations in a multi-cycle range share the same DecoratorId
- Uses MastForest::all_decorators() (which should be gated under the testing feature, and miden-core should be imported as a dev-dependency of miden-assembly, activating that feature, to get access to it)
huitseeker
left a comment
LGTM, thanks for the test!
core/src/mast/mod.rs (Outdated)

```diff
  // ================================================================================================

- #[cfg(test)]
+ #[cfg(feature = "testing")]
```
Consider using #[cfg(any(test, feature = "testing"))] instead of just #[cfg(feature = "testing")].
The current approach means that unit tests in the core crate itself cannot use all_decorators() when running cargo test without explicitly enabling the testing feature.
The suggested alternative is the standard pattern for test helpers that need to be shared across crates while still being available for local unit tests.
Thanks for the review! I fixed the test naming and also fixed fmt.

@PivasDesant - seems like build/tests are currently failing on this PR. Would you be able to fix this?
```rust
// to that AsmOp. We need to find the minimum op_idx for each unique AsmOp decorator
// to determine the start of its range, then check if target_op_idx falls within
// that range.
let mut asmop_ranges: BTreeMap<DecoratorId, (usize, usize)> = BTreeMap::new();
```
With the 1-1 mapping now in place, could we simplify this function? The current implementation builds a BTreeMap and does two passes, but since every operation is now directly linked to its AsmOp, we can query by index directly.
Something like:
```rust
pub fn get_assembly_op(
    &self,
    node_id: MastNodeId,
    target_op_idx: Option<usize>,
) -> Option<&AssemblyOp> {
    let node = &self[node_id];
    let find_asmop = |ids: &[DecoratorId]| {
        ids.iter().find_map(|&id| match &self[id] {
            Decorator::AsmOp(asm_op) => Some(asm_op),
            _ => None,
        })
    };
    match target_op_idx {
        Some(idx) => match node {
            MastNode::Block(_) => find_asmop(self.decorators_for_operation(node_id, idx)),
            _ => match idx {
                0 => find_asmop(self.before_enter_decorators(node_id)),
                1 => find_asmop(self.after_exit_decorators(node_id)),
                _ => None,
            },
        },
        None => {
            // Search order: before_enter, then per-op (for blocks), then after_exit
            find_asmop(self.before_enter_decorators(node_id))
                .or_else(|| match node {
                    MastNode::Block(block) => (0..block.num_operations() as usize)
                        .find_map(|i| find_asmop(self.decorators_for_operation(node_id, i))),
                    _ => None,
                })
                .or_else(|| find_asmop(self.after_exit_decorators(node_id)))
        }
    }
}
```

This removes the Box<dyn Iterator> and BTreeMap, and the common case (Block + target index) becomes a direct lookup.
I'm still at a loss as to why we're doing this in the first place. Bobbin's answer to this question was that it would simplify the decorator finding logic - but comparing this with what we currently have, I find them pretty equally complex. IMO, most of the complexity comes from the fact that we have "before enter", "after exit", and sometimes "operations" decorators. The num_cycles check is not really the issue - it only applies to "operations" decorators, and is honestly pretty simple.
In contrast, the clear downside of this new approach is increased memory usage.
Am I missing something?
Perhaps it was worth it as an experiment which has shown it may be best to not push this one further? WDYT @bobbinth ?
Though I would add that the above is a bit complex in order to avoid allocations, and non-allocating code is always a bit involved.
Implements 1-1 mapping for operations and AsmOp decorators (#2446)

Before this change, only the first operation was linked to an AsmOp when an instruction compiled to multiple operations. Now all operations are linked.

The implementation adds decorator links for all operations in set_instruction_cycle_count(). This also simplifies get_assembly_op(), since we can check index equality directly instead of doing range checks. There is no data duplication, since we reuse the same DecoratorId for all operations covered by an AsmOp.